MX-HPC online interactive training on Zoom

This online training introduces MX software in an HPC environment and is intended for structural biologists new to HPC. The ideal number of participants is one to five, and PReSTO membership is required prior to the event.

PReSTO membership

Please join PReSTO by following the steps under Academic PReSTO access, which is open to academic structural biologists with Swedish research projects.

Session duration

Multiple 1-2 hour online sessions until the curriculum is covered.

Request training

Send an email to Martin Moche and ask for a 2-hour startup session.

Training procedures

Screen sharing and demos on the NSC Tetralith, LUNARC Cosmos and MAXIV offline clusters. Participants are encouraged to bring their own data.

Curriculum - what to demo and discuss:

We demo and get you started with a selected set of MX software listed below; see also Structural biology software in PReSTO.

1. xds, xdsapp, dials, xdsgui, autoPROC (compute node; see the batch-script sketch after this list)
2. high-resolution xdsapp processing prior to the staraniso web server (compute node)
3. ccp4i, ccp4i2 (login node except phaser/buccaneer) and coot (login node)
4. pymol (login node): making presentation videos, displaying electron density maps, labelling residues
5. phenix GUI on the login node launching jobs to compute nodes; SLURM settings for long/short jobs
6. sharp, buster (compute node)
7. hkl2map (compute node)
8. adxv and albula (login node)
9. Uppsala Software Factory (login node)
10. Issues and workarounds
  a. Avoiding the DIALS out-of-memory error message
  b. Electrostatic surfaces are disabled in free PyMOL (to be addressed in PReSTO)
11. Structural biology software not included in the demo:
  a. chimera, chimeraX, ccp4mg (login node & PReSTO menu)
  b. relion with gpu support (cryo-EM software)
  c. other cryo-EM software
  d. xfel software (nxds, crystfel)
  e. imosflm
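
To give a feel for how the compute-node software in item 1 is typically run, here is a minimal sbatch sketch for XDS on NSC Tetralith. It is a sketch only: the project ID, module names, time limit and data path are placeholders and assumptions, so check "module avail" on the cluster and your own SUPR project before using anything like it.

    #!/bin/bash
    # run_xds.sh - minimal sketch of an XDS batch job for NSC Tetralith.
    # Project ID, module names, time limit and data path are placeholders,
    # adjust them to your own NAISS allocation and data.
    #SBATCH -A naiss20XX-XX-XXX
    #SBATCH -t 01:00:00
    #SBATCH -N 1
    #SBATCH --exclusive

    # Assumed module names, check "module avail" on the cluster
    module load PReSTO
    module load XDS

    # Work in project storage rather than /home/x_user
    cd /proj/xray/users/x_user/mydata

    # Parallel XDS, reads XDS.INP from the current directory
    xds_par

Submit with "sbatch run_xds.sh" and check progress with "squeue -u x_user"; the same pattern applies to the other compute-node programs in the list.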

Managing the HPC environment:

A) Compute and login nodes
B) Launching the default/latest version from a terminal window or the PReSTO menu
C) Handling the SLURM queue with sbatch, squeue, etc. (see the terminal sketch after this list)
D) Interactive command and development nodes
E) NSC Tetralith special commands lastjobs, jobsh, storagequota
F) /home/x_user vs /proj/xray/users/x_user at NSC Tetralith
G) Snapshot file recovery
H) Pros and cons of the NAISS systems NSC Tetralith and Berzelius vs the MAXIV offline cluster
I) Basic job monitoring with "top -u x_user"
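
The terminal sketch below illustrates points B-F and I above. The project ID, script name, job ID and node name are placeholders, and the PReSTO module name is an assumption; the NSC-specific commands (interactive, lastjobs, jobsh, storagequota) are the ones named in the list.

    # B) launch the default/latest version of a program from a terminal
    module load PReSTO               # assumed module name for the PReSTO environment
    coot &                           # runs on the login node

    # C) handle the SLURM queue
    sbatch run_xds.sh                # submit a batch script, returns a job ID
    squeue -u x_user                 # list your pending and running jobs
    scancel 1234567                  # cancel a job by ID (placeholder ID)

    # D) interactive session on a compute node (placeholder project ID)
    interactive -A naiss20XX-XX-XXX -t 01:00:00 -N 1

    # E) NSC Tetralith special commands
    lastjobs                         # recently finished jobs
    jobsh n123                       # shell into a node of a running job (placeholder node)
    storagequota                     # disk usage in /home and /proj

    # F) work in project storage rather than the small /home/x_user quota
    cd /proj/xray/users/x_user

    # I) basic monitoring of your own processes on a node
    top -u x_user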

How to get compute time and storage at NAISS via SUPR

The NAISS (formerly SNIC) User and Project Repository (SUPR) is where to apply for compute time and storage on NAISS resources.

i. How to apply for NAISS Small Compute
ii. How to apply for NAISS Small Storage

